Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- The human-robot interaction (HRI) field has recognized the importance of enabling robots to interact with teams. Human teams rely on effective communication for successful collaboration in time-sensitive environments, and robots can play a role in enhancing team coordination through real-time assistance. Despite significant progress in human-robot teaming research, an essential gap remains in how robots can effectively communicate with action teams using multimodal interaction cues in time-sensitive environments. This study addresses that gap with an in-lab experiment investigating how multimodal robot communication in action teams affects workload and human perception of robots. We explore team collaboration in a medical training scenario in which a robotic crash cart (RCC) provides verbal and non-verbal cues to help users remember to perform iterative tasks and search for supplies. Our findings show that verbal cues for object-search tasks and visual cues for task reminders reduce team workload and increase perceived ease of use and perceived usefulness more effectively than a robot with no feedback. Our work contributes to multimodal interaction research in HRI, highlighting the need for further human-robot teaming research to identify best practices for integrating collaborative robots in time-sensitive environments such as hospitals, search and rescue, and manufacturing. (Free, publicly accessible full text available August 25, 2026.)
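As an illustration of the cue policy these findings suggest, here is a minimal Python sketch that routes object-search messages to a verbal channel and task reminders to a visual one. This is not the authors' implementation; the class and function names (TaskType, Cue, select_cue) and the payload format are hypothetical.

```python
# A minimal sketch (not the study's implementation) of the modality mapping
# the findings suggest: verbal cues for object search, visual cues for
# task reminders. All names and payload formats here are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto


class TaskType(Enum):
    OBJECT_SEARCH = auto()   # e.g., "find the syringe"
    TASK_REMINDER = auto()   # e.g., "recheck vitals every 2 minutes"


@dataclass
class Cue:
    modality: str  # "verbal" or "visual"
    payload: str   # utterance text, or a light/display pattern identifier


def select_cue(task: TaskType, message: str) -> Cue:
    """Map a team-support task to the cue modality the study found most effective."""
    if task is TaskType.OBJECT_SEARCH:
        # Verbal cues reduced workload for supply-search tasks.
        return Cue(modality="verbal", payload=message)
    # Visual cues (e.g., lighting the relevant drawer) worked best for reminders.
    return Cue(modality="visual", payload=message)


if __name__ == "__main__":
    print(select_cue(TaskType.OBJECT_SEARCH, "The syringe is in drawer 3."))
    print(select_cue(TaskType.TASK_REMINDER, "blink_drawer_3"))
```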
- Our bodies are constantly in motion: from the bending of arms and legs to the less conscious movement of breathing, our precise shape and location are always changing. This can make subtler developments (e.g., the growth of hair or the healing of a wound) difficult to observe. Our work focuses on helping users record and visualize this type of subtle, longer-term change. We present a mobile tool that combines custom 3D tracking with interactive visual feedback and computational imaging to capture personal time-lapse, which approximates longer-term video of the subject (typically, part of the capturing user's body) under a fixed viewpoint, body pose, and lighting condition. These personal time-lapses offer a powerful and detailed way to track visual changes of the subject over time. We begin with a formative study that examines what makes personal time-lapse so difficult to capture. Building on our findings, we motivate the design of our capture tool, evaluate this design with users, and demonstrate its effectiveness in a variety of challenging examples.
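To make the fixed-viewpoint requirement concrete, here is a minimal sketch, assuming a 6-DoF camera pose (rotation matrix plus translation) is available from the phone's tracking stack, of how a capture tool might compare the live pose against the stored reference pose and guide the user back into alignment. The function names and tolerance values are hypothetical, not taken from the paper.

```python
# A minimal sketch of viewpoint repeatability for personal time-lapse:
# compare the live camera pose to the reference pose from the first session
# and report how far off the user is. Pose source and tolerances are assumed.
import numpy as np


def pose_error(R_ref, t_ref, R_cur, t_cur):
    """Return (rotation error in degrees, translation error in meters)."""
    R_rel = R_ref.T @ R_cur
    # Angle of the relative rotation, clipped for numerical safety.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_theta))
    trans_err_m = float(np.linalg.norm(t_cur - t_ref))
    return rot_err_deg, trans_err_m


def alignment_feedback(rot_err_deg, trans_err_m,
                       rot_tol_deg=2.0, trans_tol_m=0.02):
    """Simple go/no-go guidance; a real UI would render continuous feedback."""
    if rot_err_deg <= rot_tol_deg and trans_err_m <= trans_tol_m:
        return "aligned: capture frame"
    return f"adjust: rotate {rot_err_deg:.1f} deg, move {100 * trans_err_m:.1f} cm"


if __name__ == "__main__":
    R_ref, t_ref = np.eye(3), np.zeros(3)
    # Current pose: a 5-degree rotation about the y-axis plus a 5 cm offset.
    a = np.radians(5.0)
    R_cur = np.array([[np.cos(a), 0, np.sin(a)],
                      [0, 1, 0],
                      [-np.sin(a), 0, np.cos(a)]])
    t_cur = np.array([0.05, 0.0, 0.0])
    print(alignment_feedback(*pose_error(R_ref, t_ref, R_cur, t_cur)))
```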
- The robotics community continually strives to create robots that are deployable in real-world environments. Often, robots are expected to interact with human groups. To achieve this goal, we introduce a new method, the Robot-Centric Group Estimation Model (RoboGEM), which enables robots to detect groups of people. Much of the work reported in the literature focuses on dyadic interactions, leaving a gap in our understanding of how to build robots that can effectively team with larger groups of people. Moreover, many current methods rely on exocentric vision, where cameras and sensors are placed externally in the environment rather than onboard the robot; consequently, these methods are impractical for robots in unstructured, human-centric environments, which are novel and unpredictable. Furthermore, the majority of work on group perception is supervised, which can inhibit performance in real-world settings. RoboGEM addresses these gaps by predicting social groups solely from an egocentric perspective using color and depth (RGB-D) data. To achieve group predictions, RoboGEM leverages joint motion and proximity estimations. We evaluated RoboGEM on a challenging, egocentric, real-world dataset in which both the pedestrians and the robot are in motion simultaneously, and show that RoboGEM outperformed two state-of-the-art supervised methods in detection accuracy by up to 30%, with a lower miss rate. Our work will be helpful to the robotics community and serve as a milestone toward building unsupervised systems that enable robots to work with human groups in real-world environments.
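As a rough illustration of grouping from joint motion and proximity estimates (not the published RoboGEM algorithm), the sketch below links two tracked pedestrians when they are both near each other and moving with similar velocity, then reads off connected components as social groups. Positions, velocities, and thresholds are assumed inputs derived from egocentric RGB-D tracking.

```python
# A minimal, unsupervised grouping sketch in the spirit of RoboGEM's cues:
# pair pedestrians by proximity AND co-motion, then take connected components.
# Inputs and thresholds are hypothetical, not the paper's.
import numpy as np


def group_pedestrians(positions, velocities,
                      max_dist_m=1.5, max_speed_diff=0.5):
    """Return social groups as lists of pedestrian indices."""
    n = len(positions)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            close = np.linalg.norm(positions[i] - positions[j]) <= max_dist_m
            comoving = np.linalg.norm(velocities[i] - velocities[j]) <= max_speed_diff
            adj[i, j] = adj[j, i] = close and comoving

    # Connected components over the affinity graph via depth-first search.
    groups, unseen = [], set(range(n))
    while unseen:
        stack, comp = [unseen.pop()], []
        while stack:
            k = stack.pop()
            comp.append(k)
            nbrs = {m for m in np.flatnonzero(adj[k]) if m in unseen}
            unseen -= nbrs
            stack.extend(nbrs)
        groups.append(sorted(comp))
    return groups


if __name__ == "__main__":
    # Two people walking together, one walking the other way, far off.
    pos = np.array([[0.0, 0.0], [0.8, 0.1], [5.0, 5.0]])
    vel = np.array([[1.0, 0.0], [1.1, 0.0], [-1.0, 0.0]])
    print(group_pedestrians(pos, vel))  # -> [[0, 1], [2]]
```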